Trustworthiness in Virtual Organizations
This study examines perceptions of human trustworthiness as a key component in countering insider threats. The term insider threat refers to situations where a critical member of an organization behaves against the interests of the organization in an illegal and/or unethical manner. Identifying how an individual's behavior varies over time, and how anomalous behavior can be detected, are important elements in the preventive control of insider threat behaviors when securing cyber infrastructure. Using online team-based game playing, this study seeks to re-create realistic insider threat situations in which human "sensors" can observe changes in a target's behavior. The intellectual merit of this socio-technical study lies in its capability to tackle complex insider threat problems by adopting a social psychological theory on predicting human trustworthiness in a virtual collaborative environment. The study contributes a theoretical framework of trustworthiness attribution and a game-playing methodology for predicting the occurrence of malfeasance.
Leader Member Exchange: An Interactive Framework to Uncover a Deceptive Insider as Revealed by Human Sensors
This study intends to provide a theoretical ground that conceptualizes the prospect of detecting insider threats based on leader-member exchange. The framework specifically corresponds to two propositions raised by Ho, Kaarst-Brown et al. [42]. Team members, whether geographically co-located or dispersed, are analogized as human sensors in social networks with the ability to collectively "react" to deception, even when the act of deception itself is not obvious to any one member. Close interactive relationships are the key to affording a network of human sensors an opportunity to formulate baseline knowledge of a deceptive insider. The research hypothesizes that groups unknowingly impacted by a deceptive leader are likely to use certain language-action cues when interacting with each other after a leader violates group trust.
Cyber Forensics on Internet of Things: Slicing and Dicing Raspberry Pi
Any device can now connect to the Internet, and the Raspberry Pi is one of the more popular platforms: a low-cost, customizable single-board computer that makes robotics, devices, and appliances part of the Internet of Things (IoT). That low cost and customizability make the Raspberry Pi easily adopted and widespread. Unfortunately, an unprotected Raspberry Pi connected to the Internet also paves the way for cyber-attacks. Our ability to investigate, collect, and validate digital forensic evidence with confidence using the Raspberry Pi has become important. This article discusses and presents techniques and methodologies for investigating timestamp variations between different Raspberry Pi ext4 filesystems (Raspbian vs. Ubuntu MATE), comparing the forensic evidence with that of other ext4 filesystems (i.e., Ubuntu), based on interactions within a private cloud as well as a public cloud. Sixteen observational principles of file operations were documented to assist in understanding the Raspberry Pi's behavior in cloud environments. This study contributes to IoT forensics for law enforcement in cybercrime investigations.
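The kind of file-operation timestamp behavior the article documents can be observed directly from a script. The following is a minimal sketch (not the authors' methodology), assuming a POSIX system with Python: it records the three timestamps that `os.stat` exposes on ext4 (access, modification, and inode change), then shows how an append operation shifts them. Note that ext4's creation time (crtime) is not exposed by `os.stat` and requires tools such as `debugfs`.

```python
import os
import tempfile
import time

def timestamp_snapshot(path):
    """Record the three POSIX timestamps stat() exposes, in nanoseconds:
    access (atime), modification (mtime), and inode change (ctime)."""
    st = os.stat(path)
    return {
        "atime": st.st_atime_ns,
        "mtime": st.st_mtime_ns,
        "ctime": st.st_ctime_ns,
    }

# Observe how a single file operation (an append) shifts the timestamps.
fd, path = tempfile.mkstemp()
os.close(fd)
before = timestamp_snapshot(path)
time.sleep(0.05)  # let the clock advance past the filesystem's resolution
with open(path, "a") as f:
    f.write("appended data\n")
after = timestamp_snapshot(path)
os.remove(path)
```

An append modifies both file content and inode metadata, so mtime and ctime advance, while atime behavior depends on mount options such as `relatime`; documenting such per-operation effects is the spirit of the observational principles the article describes.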
Computer-Mediated Deception: Collective Language-action Cues as Stigmergic Signals for Computational Intelligence
Collective intelligence is easily observable in group-based or interpersonal pairwise interaction and is enabled by environment-mediated stigmergic signals. Based on innate ability, human sensors not only sense and coordinate but also tend to solve problems through these signals. This paper argues for the efficacy of computational intelligence that adopts the collective language-action cues of human intelligence as stigmergic signals to differentiate deception. A study was conducted in a synchronous computer-mediated communication environment with a dataset collected from 2014 to 2015. An online game was developed to examine the accuracy with which certain language-action cues (signs) differentiate deceptive actors (agents) during pairwise interaction (environment). The result of a logistic regression analysis demonstrates the computational efficacy of collective language-action cues in differentiating and sensing deception in spontaneous communication. This study contributes to computational modeling that adopts human intelligence as a basis for attributing computer-mediated deception.
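The logistic-regression step can be sketched in miniature. The code below is an illustrative toy, not the study's model or data: the two cue counts per message (self-references, negations) and the labels are invented, and the model is a plain gradient-descent logistic regression fitted to them.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent on the logistic log-loss.
    Returns (weights, bias)."""
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        grad_w, grad_b = [0.0] * n, 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # derivative of log-loss w.r.t. the logit
            for j in range(n):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    """Probability that a message is deceptive under the fitted model."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Hypothetical per-message cue counts: [self-references, negations]
X = [[1, 0], [2, 1], [1, 1], [5, 3], [6, 4], [4, 3]]
y = [0, 0, 0, 1, 1, 1]  # 1 = message later judged deceptive
w, b = fit_logistic(X, y)
```

The fitted coefficients indicate how strongly each cue shifts the log-odds of deception, which is the sense in which a regression over language-action cues can "sense" deception computationally.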
Resilience of Society to Recognize Disinformation: Human and/or Machine Intelligence
The paper conceptualizes the societal impacts of disinformation in hopes of developing a computational approach that can identify disinformation in order to strengthen social resilience. An innovative approach that considers the sociotechnical interaction phenomena of social media is utilized to address and combat disinformation campaigns. Based on theoretical inquiries, this study proposes conducting experiments that capture subjective and objective measures and datasets while adopting machine learning to model how disinformation can be identified computationally. The study will focus particularly on understanding communicative social actions as human intelligence when developing machine intelligence to learn about disinformation that is deliberately misleading, as well as the ways people judge the credibility and truthfulness of information. Previous experiments support the viability of a sociotechnical approach, i.e., connecting subtle language-action cues and linguistic features from human communication with hidden intentions, thus leading to deception detection in online communication. The study intends to derive a baseline dataset and a predictive model, thereby creating an information system artefact with the capability to differentiate disinformation.
Bridging the Security Gap between Software Developers and Penetration Testers: A Job Characteristic Theory Perspective
Building on Job Characteristics Theory (JCT), this article suggests that job characteristics differ between software developers and penetration testers, and that this difference generates different levels of job motivation related to information security protection between the two groups. This study proposes a research model based on JCT to examine the differences in job motivation between software developers and penetration testers. Insights gained from the research model can be used to: (1) bridge the security gap between software development and penetration testing to alleviate software vulnerabilities, and (2) propose viable suggestions that promote mutual understanding between the two professional groups to improve software security. Moving beyond the propositions offered by the research model, this study will design and build a laboratory experiment to capture actual behaviors related to job motivation.
Interdisciplinary Practices in iSchools
Interdisciplinarity is in the DNA of the iSchools. This workshop invites you to discuss how interdisciplinarity plays out in theory and practice. The workshop addresses the uniqueness of the iSchools and provides an interactive framework to discuss and reflect on interdisciplinary practice. It suggests models and tools to describe relations between disciplines while offering a venue to brainstorm and envision issues of interest with like-minded colleagues. The purpose of this workshop is to establish a setting for continuous dialogue among colleagues on how interdisciplinarity plays out in practice. The workshop aims to create a forum for reflection on local interdisciplinary practice(s) and to consider the possibilities of forming research networks. The workshop opens with a panel presentation from iSchool deans and senior faculty discussing current interdisciplinary practices in iSchools, together with presentations that address theoretical frameworks of interdisciplinarity. These presentations will form the basis for small-group discussions in the afternoon.
Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence
Artificial Intelligence (AI) aims to elevate healthcare by aiding clinical decision support. Overcoming the challenges related to the design of ethical AI will enable clinicians, physicians, healthcare professionals, and other stakeholders to use and trust AI in healthcare settings. Through a thematic analysis, this study attempts to identify the major ethical principles influencing the utility performance of AI at different technological levels, such as data access, algorithms, and systems. This data-driven study analyzed secondary survey data from the Pew Research Center (2020) of 36 AI experts to categorize the top ethical principles of AI design. We observed that justice, privacy, bias, lack of regulations, risks, and interpretability are the most important principles to consider for ethical AI. To resolve the ethical issues identified by the analysis and by domain experts, we propose a new utilitarian ethics-based theoretical framework for designing ethical AI for the healthcare domain.
Sociotechnical systems research: Defining, converging, and researching as a community
The Consortium for the Science of Sociotechnical Systems (CSST) serves as a trans-discipline community, connecting like-minded scholars from many different intellectual communities. CSST brings together researchers from a wide range of disciplines to develop a common language and scholarly repertoire as we work to understand diverse sociotechnical issues. Researchers focus on improving human lives through understanding sociotechnical systems, conducting research on human activity such as collaboration, creativity, learning, and economic production in domains like healthcare, education, science, leisure, and computing. This requires researchers to understand both the social and technical aspects of human organization. This workshop supports the continued advancement of definitions and boundaries in this area. We will engage in activities with established leaders as well as newcomers in this trans-discipline to build understanding of the factors that support the community's cohesion, and we aim to leverage the diversity of the work being conducted by its members to engender learning and research innovation.
Behavioral Parameters of Trustworthiness for Countering Insider Threats
This proposal is intended to examine human trustworthiness as a key component for countering insider threats in the arena of corporate personnel security. Employees with access and authority have the most potential to cause damage to organizational information, to organizational reputation, or to the operational stability of the organization. I am interested in studying the basic mechanisms of how to detect changes in the trustworthiness of an individual who holds a key position in an organization, by observing overt behavior, including communications behavior, over time. Rotter (1980) defines trust as a generalized expectancy, held by an individual or a group, that the communications of another (individual or group) can be relied upon. In this investigation, "trustworthiness" is defined as the degree of correspondence between communicated intentions and behavioral outcomes that are observed over time (Rotter, 1967, 1980). A trustworthy target's words and actions remain reliable, ethical, and consistent, with a degree of fluctuation that does not exceed observers' expectations over time (Hardin, 1996). Whether an employee is trustworthy is thus determined by the subjective perceptions of individuals in his or her social network who have direct business-functional connections, and thus the opportunity to repeatedly observe the correspondence between communications and behavior. This study treats observed changes in behavior, as registered by human perceptions, as analogous to "sensors" on the network. Attribution Theory is adopted in the experimental situations (the "leader's dilemma" game) to extract indirect perceptions of trustworthiness toward a critical worker over time in group dynamics (Kelley, 1973). The principles of distinctiveness, consensus, and consistency are applied in these experimental situations.